Regularized Total Least Squares: Computational Aspects and Error Bounds

Authors

  • Shuai Lu
  • Sergei V. Pereverzyev
  • Ulrich Tautenhahn
Abstract

For solving linear ill-posed problems, regularization methods are required when both the right-hand side and the operator are contaminated by noise. In the present paper, regularized approximations are obtained by regularized total least squares (RTLS) and dual regularized total least squares (dual RTLS). We discuss computational aspects and provide order-optimal error bounds that characterize the accuracy of the regularized approximations. The results extend earlier results in which the operator is given exactly. We also present numerical experiments that shed light on the relationship between RTLS, dual RTLS and standard Tikhonov regularization.
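The abstract does not spell out the two problems; for orientation, the formulations that are standard in the RTLS literature (the notation B for the penalty operator, R for the side-constraint radius, and \delta, h for the noise levels of the right-hand side and the operator is assumed here, not taken from the abstract) read

    RTLS:       \min_{x,\hat A,\hat b} \; \|\hat A - A\|_F^2 + \|\hat b - b\|^2
                \quad \text{subject to} \quad \hat A x = \hat b, \;\; \|Bx\| \le R,

    dual RTLS:  \min_{x,\hat A,\hat b} \; \|Bx\|
                \quad \text{subject to} \quad \hat A x = \hat b, \;\; \|\hat b - b\| \le \delta, \;\; \|\hat A - A\| \le h.

In the dual problem the noise levels \delta and h replace the prescribed bound R; when the operator is treated as exact (h = 0), the dual problem essentially reduces to Tikhonov regularization with a discrepancy-type parameter choice, which is the relationship the numerical experiments above refer to.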


Similar resources

On a quadratic eigenproblem occurring in regularized total least squares

In a recent paper, Sima, Van Huffel and Golub [Regularized total least squares based on quadratic eigenvalue problem solvers. BIT Numerical Mathematics 44, 793-812 (2004)] suggested a computational approach for solving regularized total least squares problems via a sequence of quadratic eigenvalue problems. Taking advantage of a variational characterization of real eigenvalues of nonlinear eigen...
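For context (general form only; the specific coefficient matrices of the cited paper are not reproduced here), a quadratic eigenvalue problem asks for pairs (\lambda, u) with

    (\lambda^2 M + \lambda C + K)\, u = 0

for given square matrices M, C, K. In the RTLS approach referred to above, one such problem, assembled from A, b and the regularization matrix, is solved at each step of the iteration.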


Algorithmic Stability and Meta-Learning

A mechanism of transfer learning is analysed, where samples drawn from different learning tasks of an environment are used to improve the learner's performance on a new task. We give a general method to prove generalisation error bounds for such meta-algorithms. The method can be applied to the bias learning model of J. Baxter and to derive novel generalisation bounds for meta-algorithms searching...


Less is More: Nyström Computational Regularization

We study Nyström-type subsampling approaches to large-scale kernel methods, and prove learning bounds in the statistical learning setting, where random sampling and high-probability estimates are considered. In particular, we prove that these approaches can achieve optimal learning bounds, provided the subsampling level is suitably chosen. These results suggest a simple incremental variant of N...
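As a rough illustration of the Nyström idea only (a generic sketch under our own assumptions; the function names, the Gaussian kernel and the ridge solve below are illustrative choices, not the estimator analysed in the cited paper): the full n-by-n kernel matrix is replaced by a low-rank approximation built from m subsampled landmark points, so that only an m-by-m system has to be solved.

    import numpy as np

    def gaussian_kernel(X, Y, gamma=1.0):
        # Pairwise Gaussian (RBF) kernel matrix between the rows of X and Y.
        d2 = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
        return np.exp(-gamma * d2)

    def nystrom_krr(X, y, m=50, lam=1e-3, gamma=1.0, seed=0):
        # Kernel ridge regression with plain Nystrom subsampling:
        # pick m landmark rows, approximate K ~ K_nm pinv(K_mm) K_mn,
        # and fit coefficients alpha on the landmarks only.
        rng = np.random.default_rng(seed)
        idx = rng.choice(X.shape[0], size=min(m, X.shape[0]), replace=False)
        Xm = X[idx]
        K_nm = gaussian_kernel(X, Xm, gamma)          # n x m
        K_mm = gaussian_kernel(Xm, Xm, gamma)         # m x m
        # Subsampled estimator: (K_nm^T K_nm + lam * K_mm) alpha = K_nm^T y.
        A = K_nm.T @ K_nm + lam * K_mm
        alpha = np.linalg.solve(A + 1e-10 * np.eye(A.shape[0]), K_nm.T @ y)
        return lambda X_new: gaussian_kernel(X_new, Xm, gamma) @ alpha

The subsampling level m acts like an additional regularization parameter in this sketch, which matches the abstract's point that the subsampling level has to be suitably chosen.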


Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression

We consider the optimization of a quadratic objective function whose gradients are only accessible through a stochastic oracle that returns the gradient at any given point plus a zero-mean, finite-variance random error. We present the first algorithm that achieves jointly the optimal prediction error rates for least-squares regression, both in terms of forgetting the initial conditions in O(1/n)...
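A minimal toy version of this setting (our own sketch; the constant step size, the Gaussian noise model and the averaging scheme are assumptions, not the algorithm of the cited paper): the learner repeatedly queries a noisy gradient of the least-squares objective, takes a gradient step, and reports the running average of its iterates.

    import numpy as np

    def averaged_sgd_least_squares(A, b, steps=10_000, step_size=1e-2,
                                   noise_std=0.1, seed=0):
        # Minimize 0.5 * ||A x - b||^2 with a stochastic first-order oracle:
        # each query returns the exact gradient plus zero-mean Gaussian noise.
        # The running (Polyak-Ruppert) average of the iterates is returned.
        rng = np.random.default_rng(seed)
        x = np.zeros(A.shape[1])
        x_avg = np.zeros_like(x)
        for t in range(1, steps + 1):
            grad = A.T @ (A @ x - b)                         # exact gradient
            grad += noise_std * rng.standard_normal(x.shape) # oracle noise
            x -= step_size * grad                            # constant-step-size step
            x_avg += (x - x_avg) / t                         # average of the iterates
        return x_avg

The step size has to be small relative to the largest eigenvalue of A^T A for the iteration to be stable; the cited paper studies how fast such schemes forget the initial condition while averaging out the oracle noise.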


SPARLS: A Low Complexity Recursive $\mathcal{L}_1$-Regularized Least Squares Algorithm

We develop a Recursive L1-Regularized Least Squares (SPARLS) algorithm for the estimation of a sparse tap-weight vector in the adaptive filtering setting. The SPARLS algorithm exploits noisy observations of the tap-weight vector output stream and produces its estimate using an Expectation-Maximization-type algorithm. Simulation studies in the context of channel estimation, employing multipath wi...
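To make the objective concrete (a generic batch sketch using iterative soft-thresholding, deliberately not the recursive EM-based SPARLS update, which is not reproduced here): L1-regularized least squares looks for a sparse weight vector minimizing 0.5*||y - X w||^2 + lam*||w||_1, which the proximal-gradient loop below solves.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of the L1 norm: shrink every entry toward zero by t.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def l1_least_squares_ista(X, y, lam=0.1, iters=500):
        # Batch L1-regularized least squares, 0.5*||y - X w||^2 + lam*||w||_1,
        # solved by iterative soft-thresholding (ISTA / proximal gradient).
        L = np.linalg.norm(X, 2) ** 2      # Lipschitz constant of the smooth part
        w = np.zeros(X.shape[1])
        for _ in range(iters):
            grad = X.T @ (X @ w - y)       # gradient of the least-squares term
            w = soft_threshold(w - grad / L, lam / L)
        return w

SPARLS, by contrast, is recursive: as described in the abstract above, it updates its estimate as the observation stream arrives rather than refitting in batch.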



Journal:
  • SIAM J. Matrix Analysis Applications

Volume 31, Issue: -

Pages: -

Publication date: 2009